
AI has forced healthcare leaders into a precarious position. While some leaders were still debating whether the technology should be used at all, its value was so clear to others that teams began using it long before it was formally vetted, governed or secured.
Clinicians have been pasting notes into ChatGPT to save time, operations teams have been testing agent-based chatbots to streamline workflows, and data scientists have been integrating SaaS tools into pipelines. The boundless potential and versatility of AI mean it has no roadmap—it simply arrived one day in a browser tab.
It’s this tension between organic experimentation and disciplined governance that sat at the heart of a recent Health System CIO webinar, Cyber Strategies for Securing the AI Influx.
What we sketched out over the course of the conversation was a path from “AI everywhere, unmanaged” to “AI enabled, and with purpose”. In this article, I’ll share insights and advice from the webinar on how health systems can regain control and make AI a strategic asset rather than a security hazard.
From “because it’s there” to “because it’s right”
The first trap that most organizations—healthcare or otherwise—fall into is adopting AI just because it’s available.
The panel likened it to the early days of telemedicine during COVID, when tools were going live very quickly but governance, integration and long-term risk were often afterthoughts. This is the scenario every CISO fears: confidential data drifts into AI tools and begins influencing clinical or operational decisions without proper checks. As a result, you’re not only putting Protected Health Information (PHI) at risk, but also everything from staffing models to strategic plans and, most critically, uptime and patient safety.
So before you switch an AI tool on, ask where its outputs will show up in workflows and what happens if it’s wrong. Even well-meaning automated actions, such as AI-assisted account lockouts, can’t be allowed to derail something as time-sensitive as surgery or push clinicians into unsafe workarounds.
It’s not about extremes either. When you “block everything,” you kill innovation; when you “allow everything,” you kill your ability to control. So, aim for the middle: adaptive policies, enterprise-controlled AI and clear guardrails so you can protect data and still move fast.
From the “Department of No” to the “Department of Know”
If AI is forcing anything to evolve, it’s governance culture.
The old model most of us recognize is to view security as the “department of no”—the place where requests go in and are delayed long enough that people eventually stop asking. That doesn’t work in healthcare, where clinical teams will push forward regardless and the stakes are much higher than a delayed product launch.
Instead of being known for blocking, imagine security as the place people go for understanding, the “department of know(ledge)”: a team that knows where data lives, how it flows between systems, which AI tools are touching it and on what terms.
That calls for a model where security owns roughly a third of risk decisions, with compliance and data governance each owning a third as well. A “33 percent rule” like this ensures no single business function is judge, jury and executioner. It moves the organization away from a “the CISO will solve it” mentality and toward a genuinely shared view of AI risk.
The conversation then shifts. Less “can we do this?” and more “what needs to be true for us to do this safely?”
“Security shouldn’t be dictating all components, doing the blocking, setting the strategy, doing everything around that. You need to have other stakeholders at the table.”
— Steven Ramirez, VP & Chief Information Security & Technology Officer (CISTO), Renown Health
You can’t secure what you pretend not to see
After years of the focus being on digital transformation, AI is pushing healthcare into a new phase: data transformation and governance. AI is only as strong as the data underneath it, making hygiene, ownership and classification non-negotiable.
And while many organizations still avoid full data discovery, remember that AI will learn from whatever it can access. So, ignoring pockets of exposure doesn’t make you safer; it just ensures you discover problems later, when they’re harder to contain.
Protecting AI starts with protecting data; you need to know where sensitive information is at rest, in motion and in use, and apply consistent controls across all instances. That requires real partnership between security and data teams, because just as you can’t protect what you don’t understand, data leaders can’t drive AI safely without getting a grip on risk, compliance and privacy.
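As a rough illustration of the data-at-rest piece, a discovery pass can start as simply as scanning files for PHI-like strings. This is a minimal sketch under stated assumptions: the patterns and helper names below are hypothetical and far from exhaustive, and real PHI discovery tooling covers many more identifier formats and file types.

```python
import re
from pathlib import Path

# Illustrative patterns only -- real PHI discovery needs far broader coverage
# (names, dates of birth, and medical record number formats vary by system).
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_text(text: str) -> dict[str, int]:
    """Count PHI-like matches per pattern in a block of text."""
    return {name: len(pattern.findall(text))
            for name, pattern in PHI_PATTERNS.items()
            if pattern.findall(text)}

def scan_tree(root: str) -> dict[str, dict[str, int]]:
    """Walk a directory and flag text files containing PHI-like strings."""
    findings = {}
    for path in Path(root).rglob("*.txt"):
        hits = scan_text(path.read_text(errors="ignore"))
        if hits:
            findings[str(path)] = hits
    return findings
```

The point of even a toy pass like this is the inventory it produces: once you know which stores hold sensitive strings, you can decide which ones an AI tool should never be allowed to read.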
Tabletop exercise sessions that simulate AI security incidents in real time can help improve both risk awareness and data quality. But more broadly, it’s about moving from gatekeeping to enabling, with security providing data owners with the visibility and guardrails to make informed decisions about how AI interacts with their data.
Partnering with users beats blocking them every time, because the people closest to the workflow know where the edge cases are and they’ll surface issues early if they see security as a collaborator rather than an obstacle.
“I think AI is kind of forcing our hand, as we did with the digital transformation, to more of a data transformation, making sure we have more governance and ownership in those respective areas.”
— Steven Ramirez, VP & Chief Information Security & Technology Officer (CISTO), Renown Health
AI as a threat multiplier, and a defensive ally
AI isn’t just changing how healthcare delivers care; it’s changing how attackers operate. AI-driven spear-phishing is already outperforming traditional campaigns, and nation-state actors are layering AI across the kill chain, from recon to payload.
So, security teams have to respond in kind, using AI defensively to watch behavior, learn what “normal” looks like and spot subtle anomalies in near real time. Instead of relying on disconnected toolsets, you need an integrated digital ecosystem: platforms that talk to each other seamlessly, support unified policies and anchor AI inside your core EHR.
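As a minimal sketch of what “learning normal” can mean in practice, consider baselining an activity count (say, record accesses per clinician per hour) and flagging large deviations. The function names and the three-sigma threshold here are illustrative assumptions, not a production detection design, which would model far richer behavior than a single count.

```python
import statistics

def baseline(history: list[int]) -> tuple[float, float]:
    """Learn 'normal' from historical hourly event counts,
    e.g., patient-record accesses per clinician."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(count: int, mean: float, stdev: float,
                 threshold: float = 3.0) -> bool:
    """Flag counts more than `threshold` standard deviations above baseline."""
    if stdev == 0:
        return count != mean
    return (count - mean) / stdev > threshold
```

Real platforms refine this idea with per-user, per-role and time-of-day baselines, but the core loop is the same: learn what normal looks like, then surface the outliers for review.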
And as AI pilots accelerate, governance is tightening to match: oversight is moving from quarterly to monthly or even weekly cycles, with the goal of evaluating new tools quickly without abandoning consistency.
The webinar ended with the panel offering four simple moves you can follow:
- Don’t just patch; think about compensating controls: Ask what stands between you and a serious incident if a new AI-enabled attack path is exploited tomorrow.
- Be a storyteller and partner, not a hall monitor: To keep your seat at the table, you have to help the business see how secure AI can actually move it forward.
- Get a handle on shadow and SaaS AI sprawl: Know what tools people are using, how data flows through them and where AI functionality has been introduced, and put controls in place before, not after, a data loss event.
- Shift from reactive to proactive: Enable the business, maintain oversight and treat AI as an ongoing, collaborative program, not a one-off project.
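The shadow-AI point above can be made concrete with a very small sketch: checking proxy-log entries against a list of known AI service domains. The domain list and log format here are hypothetical; a real deployment would rely on a maintained SaaS/AI category feed and inline controls rather than a hard-coded set.

```python
from collections import Counter

# Hypothetical starter list -- a real deployment would pull a maintained
# AI/SaaS category feed rather than hard-coding domains.
KNOWN_AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def shadow_ai_report(proxy_log: list[tuple[str, str]]) -> Counter:
    """Count requests to known AI services per (user, domain) pair,
    given proxy-log entries of the form (user, domain)."""
    return Counter((user, domain) for user, domain in proxy_log
                   if domain in KNOWN_AI_DOMAINS)
```

Even this crude tally answers the first governance question, who is using what, and gives you somewhere to start conversations before a data loss event forces the issue.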
Ultimately, evolving from the “Department of No” to the “Department of Know” is a shift that will enable you to turn the AI influx from a source of chaos into a capability you can steer toward safer care, tighter control and smarter innovation.
“Good data governance and good data protection strategies help in the cleanliness of data, making sure we have good data sources that we’re tracking. And it really makes sure that everybody’s doing their homework on making sure that the data stewards and data owners are really attributed to that.”
— Steven Ramirez, VP & Chief Information Security & Technology Officer (CISTO), Renown Health
Watch the on-demand webinar to hear how top cyber leaders are moving beyond “no” to become true innovation partners. Learn to enforce zero trust protection and gain the real-time visibility needed to secure the future of care.
Going to HIMSS 2026? Get ready to see the future of clinical security and grab a front-row seat for a live demo at the Netskope Booth #10107. Keep up with everything Netskope has going on at HIMSS here.
